
    Multimodal Data Fusion and Quantitative Analysis for Medical Applications

    Medical big data is not only enormous in size but also heterogeneous and complex in structure, which makes it difficult for conventional systems or algorithms to process. These heterogeneous medical data include imaging data (e.g., Positron Emission Tomography (PET), Computerized Tomography (CT), and Magnetic Resonance Imaging (MRI)) and non-imaging data (e.g., laboratory biomarkers, electronic medical records, and hand-written doctor notes). Multimodal data fusion is an emerging field that addresses this challenge, aiming to process and analyze complex, diverse, and heterogeneous multimodal data. Fusion algorithms bring great potential to medical data analysis by 1) taking advantage of complementary information from different sources (such as the functional-structural complementarity of PET/CT images) and 2) exploiting consensus information that reflects the intrinsic essence (such as the genetic essence underlying medical imaging and clinical symptoms). Multimodal data fusion thus benefits a wide range of quantitative medical applications, including personalized patient care, optimized medical operation planning, and preventive public health. Although there has been extensive research on computational approaches for multimodal fusion, three major challenges remain in quantitative medical applications, summarized here as feature-level, information-level, and knowledge-level fusion:
    • Feature-level fusion. The first challenge is mining multimodal biomarkers from high-dimensional, small-sample multimodal medical datasets, which hinders the effective discovery of informative multimodal biomarkers. Specifically, efficient dimension-reduction algorithms are required to alleviate the "curse of dimensionality" and to satisfy the criteria for discovering interpretable, relevant, non-redundant, and generalizable multimodal biomarkers.
    • Information-level fusion. The second challenge is exploiting and interpreting inter-modal and intra-modal information for precise clinical decisions. Although radiomics and multi-branch deep learning have been used for implicit information fusion under label supervision, methods that explicitly explore inter-modal relationships in medical applications are lacking. Unsupervised multimodal learning can mine inter-modal relationships, reduce reliance on labor-intensive labeled data, and uncover potentially undiscovered biomarkers; however, mining discriminative information without label supervision remains an open challenge. Furthermore, interpreting complex non-linear cross-modal associations, especially in deep multimodal learning, is another critical challenge in information-level fusion, one that hinders the exploration of multimodal interactions in disease mechanisms.
    • Knowledge-level fusion. The third challenge is quantitative knowledge distillation from multi-focus regions in medical imaging. Although characterizing imaging features from single lesions using either feature engineering or deep learning has been investigated in recent years, both approaches neglect inter-region spatial relationships. A topological profiling tool for multi-focus regions is therefore in high demand, yet missing from current feature engineering and deep learning methods. Furthermore, incorporating domain knowledge with the knowledge distilled from multi-focus regions is another challenge in knowledge-level fusion.
    To address these three challenges, this thesis provides a multi-level fusion framework for multimodal biomarker mining, multimodal deep learning, and knowledge distillation from multi-focus regions. Specifically, our major contributions include:
    • To address the challenges in feature-level fusion, we propose an Integrative Multimodal Biomarker Mining framework to select interpretable, relevant, non-redundant, and generalizable multimodal biomarkers from high-dimensional, small-sample imaging and non-imaging data for diagnostic and prognostic applications. The feature selection criteria of representativeness, robustness, discriminability, and non-redundancy are addressed by consensus clustering, a Wilcoxon filter, sequential forward selection, and correlation analysis, respectively. The SHapley Additive exPlanations (SHAP) method and a nomogram are employed to further enhance feature interpretability in machine learning models.
    • To address the challenges in information-level fusion, we propose an Interpretable Deep Correlational Fusion framework based on canonical correlation analysis (CCA) for 1) cohesive multimodal fusion of medical imaging and non-imaging data and 2) interpretation of complex non-linear cross-modal associations. Specifically, two novel loss functions are proposed to optimize the discovery of informative multimodal representations in both supervised and unsupervised deep learning, by jointly learning inter-modal consensus and intra-modal discriminative information. An interpretation module is proposed to decipher complex non-linear cross-modal associations by leveraging interpretation methods from both deep learning and multimodal consensus learning.
    • To address the challenges in knowledge-level fusion, we propose a Dynamic Topological Analysis (DTA) framework based on persistent homology for knowledge distillation from inter-connected multi-focus regions in medical imaging and for the incorporation of domain knowledge. Unlike conventional feature engineering and deep learning, our DTA framework explicitly quantifies inter-region topological relationships, including global-level geometric structure and community-level clusters. A K-simplex Community Graph is proposed to construct the dynamic community graph representing community-level multi-scale graph structure. The constructed dynamic graph is then tracked with a novel Decomposed Persistence algorithm. Domain knowledge is incorporated into the Adaptive Community Profile, which summarizes the tracked multi-scale community topology together with additional customizable, clinically important factors.
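The feature-level criteria named above (univariate relevance via a Wilcoxon filter, non-redundancy via correlation analysis) can be sketched in a few lines. This is an illustrative simplification on synthetic data, not the thesis's actual pipeline; the significance and correlation thresholds are assumed:

```python
import numpy as np
from scipy.stats import ranksums

def wilcoxon_filter(X, y, alpha=0.05):
    """Keep features whose distributions differ between the two classes
    according to the Wilcoxon rank-sum test (univariate relevance)."""
    return [j for j in range(X.shape[1])
            if ranksums(X[y == 0, j], X[y == 1, j]).pvalue < alpha]

def drop_redundant(X, keep, max_corr=0.9):
    """Greedily drop features highly correlated with an already-kept one
    (non-redundancy via correlation analysis)."""
    selected = []
    for j in keep:
        if all(abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) < max_corr
               for s in selected):
            selected.append(j)
    return selected

# Synthetic demo: feature 0 is informative, feature 1 is a noisy copy of it.
rng = np.random.default_rng(0)
y = np.array([0] * 100 + [1] * 100)
X = rng.normal(size=(200, 50))
X[:, 0] += 2.0 * y                               # class-separating feature
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=200)  # redundant duplicate
selected = drop_redundant(X, wilcoxon_filter(X, y))
```

In a real pipeline these filters would be wrapped inside cross-validation before the sequential forward selection step, to keep the small-sample selection honest.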

    Thin film superfluid optomechanics

    Excitations in superfluid helium represent attractive mechanical degrees of freedom for cavity optomechanics schemes. Here we numerically and analytically investigate the properties of optomechanical resonators formed by thin films of superfluid $^4$He covering micrometer-scale whispering gallery mode cavities. We predict that through proper optimization of the interaction between film and optical field, large optomechanical coupling rates $g_0 > 2\pi \times 100$ kHz and single-photon cooperativities $C_0 > 10$ are achievable. Our analytical model reveals the unconventional behaviour of these thin films, such as thicker and heavier films exhibiting smaller effective mass and larger zero-point motion. The optomechanical system outlined here provides access to unusual regimes such as $g_0 > \Omega_M$ and opens the prospect of laser cooling a liquid into its quantum ground state.
    Comment: 18 pages, 6 figures
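The quoted single-photon cooperativity follows the standard optomechanical definition C0 = 4 g0² / (κ γm). A minimal sketch using the abstract's coupling rate together with an assumed cavity linewidth and mechanical damping rate (the latter two are illustrative numbers, not taken from the paper):

```python
import math

def single_photon_cooperativity(g0, kappa, gamma_m):
    """Standard optomechanical figure of merit C0 = 4 g0^2 / (kappa * gamma_m).
    All three rates must be in the same angular-frequency units."""
    return 4 * g0**2 / (kappa * gamma_m)

two_pi = 2 * math.pi
# g0 = 2*pi x 100 kHz as quoted; assumed linewidth 2*pi x 10 MHz and
# assumed mechanical damping 2*pi x 100 Hz for illustration.
C0 = single_photon_cooperativity(two_pi * 100e3, two_pi * 10e6, two_pi * 100)
```

With these assumed decay rates the quoted coupling already gives C0 = 40, comfortably in the C0 > 10 regime the abstract predicts.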

    Modelling of vorticity, sound and their interaction in two-dimensional superfluids

    Vorticity in two-dimensional superfluids is subject to intense research efforts due to its role in quantum turbulence, dissipation and the BKT phase transition. Interaction of sound and vortices is of broad importance in Bose-Einstein condensates and superfluid helium [1-4]. However, both the modelling of the vortex flow field and of its interaction with sound are complicated hydrodynamic problems, with analytic solutions only available in special cases. In this work, we develop methods to compute both the vortex and sound flow fields in an arbitrary two-dimensional domain. Further, we analyse the dispersive interaction of vortices with sound modes in a two-dimensional superfluid and develop a model that quantifies this interaction for any vortex distribution on any two-dimensional bounded domain, possibly non-simply connected, exploiting analogies with the fluid dynamics of an ideal gas and with electrostatics. As an example application we use this technique to propose an experiment that should be able to unambiguously detect single circulation quanta in a helium thin film.
    Comment: 23 pages, 8 figures
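One of the special cases with an analytic solution is a single vortex in an unbounded film, where the flow speed is |v| = κ / (2πr) with κ = h/m the circulation quantum of superfluid ⁴He. A minimal numerical sketch of this textbook case (the paper's methods address the much harder bounded, possibly non-simply connected domains):

```python
import math

# Circulation quantum kappa = h / m for helium-4.
h = 6.62607015e-34        # Planck constant, J s (exact, SI 2019)
m_he4 = 6.6446573357e-27  # helium-4 atomic mass, kg
kappa = h / m_he4         # ~ 1e-7 m^2/s

def vortex_speed(r):
    """Azimuthal flow speed a distance r from a single point vortex
    in an unbounded 2D superfluid: |v| = kappa / (2 pi r)."""
    return kappa / (2 * math.pi * r)
```

The 1/r fall-off is what makes the bounded-domain problem hard: boundaries impose a no-through-flow condition, which the paper handles via image-charge-style electrostatic analogies.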

    CO2 storage in depleted oil and gas reservoirs: A review

    Geological storage of CO2 in depleted oil and gas reservoirs has been widely endorsed due to its advantages, such as large storage capacity, good sealing performance, and existing infrastructure. This review clarifies the existing projects, advantages, significance, influencing factors, mechanisms, and storage-potential evaluation procedures of CO2 storage in depleted oil and gas reservoirs. The storage capability of depleted oil and gas reservoirs is confirmed, and the factors affecting CO2 storage potential, both geological and engineering, are summarized. The CO2 trapping mechanisms of the different storage processes in depleted oil and gas reservoirs are elaborated and divided into three stages. The evaluation of CO2 storage potential in depleted oil and gas reservoirs is summarized in four stages: basin selection, oil and gas reservoir selection, storage-security evaluation using the bowtie method, and storage-capacity calculation. The calculation accuracy of CO2 storage capacity in depleted oil and gas reservoirs can be improved by numerically determining the mineralization storage volume and by basing the dissolution storage coefficient on actual reservoir characteristics. This work intends to support CO2 storage by analyzing and studying the geological theory and engineering achievements of CO2 storage in depleted oil and gas reservoirs.
    Document Type: Invited review
    Cited as: Wei, B., Wang, B., Li, X., Aishan, M., Ju, Y. CO2 storage in depleted oil and gas reservoirs: A review. Advances in Geo-Energy Research, 2023, 9(2): 76-93. https://doi.org/10.46690/ager.2023.08.0
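The storage-capacity calculation stage typically starts from a generic volumetric screening estimate, M = A · h · φ · (1 − Sw) · ρ_CO2 · E. This sketch uses that generic formula, not necessarily the review's own workflow, and every reservoir number below is hypothetical:

```python
def co2_capacity_tonnes(area_m2, thickness_m, porosity, sw, rho_co2, eff):
    """Volumetric screening estimate of CO2 storage mass (tonnes):
    M = A * h * phi * (1 - Sw) * rho_CO2 * E, where E is the
    storage-efficiency factor accounting for sweep and heterogeneity."""
    return area_m2 * thickness_m * porosity * (1 - sw) * rho_co2 * eff / 1000.0

# Hypothetical depleted-reservoir inputs: 10 km^2 area, 20 m net thickness,
# 15% porosity, 30% irreducible water saturation, CO2 density of
# 650 kg/m^3 at reservoir conditions, 10% efficiency factor.
mass_t = co2_capacity_tonnes(10e6, 20.0, 0.15, 0.30, 650.0, 0.10)
```

Refinements of the kind the review discusses, such as mineralization volumes and reservoir-specific dissolution coefficients, would be added on top of this baseline estimate.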

    Label Enhanced Event Detection with Heterogeneous Graph Attention Networks

    Event Detection (ED) aims to recognize instances of specified types of event triggers in text. Unlike English ED, Chinese ED suffers from word-trigger mismatch due to uncertain word boundaries. Existing approaches that inject word information into character-level models have made promising progress on this problem, but they are limited by two issues. First, the interaction between characters and lexicon words is not fully exploited. Second, they ignore the semantic information provided by event labels. We thus propose a novel architecture named Label-enhanced Heterogeneous Graph Attention Networks (L-HGAT). Specifically, we transform each sentence into a graph in which character nodes and word nodes are connected with different types of edges, so that the interaction between words and characters is fully preserved. A heterogeneous graph attention network is then introduced to propagate relational messages and enrich the information interaction. Furthermore, we convert each label into a trigger-prototype-based embedding and design a margin loss to guide the model to distinguish confusing event labels. Experiments on two benchmark datasets show that our model achieves significant improvements over a range of competitive baseline methods.
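One common way to realize a margin loss over label prototypes is a hinge on the similarity between a candidate trigger's representation and each label's prototype embedding. This is an illustrative NumPy sketch of that general idea, not the paper's exact formulation:

```python
import numpy as np

def margin_loss(h, prototypes, gold, margin=1.0):
    """Hinge-style margin loss: the candidate representation h should be
    closer (by cosine similarity) to its gold label prototype than to any
    other label prototype, by at least `margin`.
    Shapes: h is (d,), prototypes is (num_labels, d)."""
    sims = prototypes @ h / (np.linalg.norm(prototypes, axis=1)
                             * np.linalg.norm(h))
    pos = sims[gold]                 # similarity to the gold prototype
    neg = np.delete(sims, gold)      # similarities to all other prototypes
    return float(np.maximum(0.0, margin - pos + neg).sum())
```

A perfectly aligned candidate incurs zero loss, while a candidate closer to a confusable label's prototype is penalized in proportion to the violation, which is what pushes confusing event labels apart during training.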